Automatic image cropping is a challenging task with many practical downstream applications. The task is often divided into sub-problems - generating cropping candidates, finding the visually important regions, and determining aesthetics to select the most appealing candidate. Prior approaches model one or more of these sub-problems separately, and often combine them sequentially. We propose a novel convolutional neural network (CNN) based method to crop images directly, without explicitly modeling image aesthetics, evaluating multiple crop candidates, or detecting visually salient regions. Our model is trained on a large dataset of images cropped by experienced editors and can simultaneously predict bounding boxes for multiple fixed aspect ratios. We consider the aspect ratio of the cropped image to be a critical factor influencing aesthetics. Prior approaches to automatic image cropping did not enforce the aspect ratio of the outputs, likely due to a lack of datasets for this task. We therefore benchmark our method on public datasets for two related tasks - first, aesthetic image cropping without regard to aspect ratio, and second, thumbnail generation, which requires fixed-aspect-ratio outputs but where aesthetics are not crucial. We show that our strategy is competitive with or outperforms existing methods on both tasks. Furthermore, our one-stage model is easier to train and significantly faster at inference than existing two-stage or end-to-end methods. We present a qualitative evaluation study and find that our model generalizes to diverse images from unseen datasets and often retains the compositional properties of the original images after cropping. Our results demonstrate that explicitly modeling image aesthetics or visual attention regions is not necessarily required to build a competitive image-cropping algorithm.
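To make the one-stage formulation concrete, here is a minimal sketch of a model with this interface; the backbone choice, head shape, and (cx, cy, w, h) box parameterization are illustrative assumptions, not the authors' published architecture:

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MultiAspectCropper(nn.Module):
    """Hypothetical one-stage cropper: one box per fixed aspect ratio."""
    def __init__(self, num_ratios=3):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # drop the classification head, keep global-pooled features
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        # 4 values (cx, cy, w, h) per aspect ratio, normalized to [0, 1]
        self.head = nn.Linear(512, 4 * num_ratios)
        self.num_ratios = num_ratios

    def forward(self, x):
        f = self.features(x).flatten(1)
        boxes = torch.sigmoid(self.head(f))
        return boxes.view(-1, self.num_ratios, 4)

model = MultiAspectCropper(num_ratios=3)
boxes = model(torch.randn(2, 3, 224, 224))  # (2, 3, 4): one box per ratio
```

A single forward pass yields all aspect-ratio variants at once, which is what makes the one-stage approach fast relative to candidate-scoring pipelines.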
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
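Since the models are publicly released, they can be loaded through the Hugging Face transformers library. The snippet below is a minimal sketch using the small bloom-560m variant; the full 176B checkpoint (bigscience/bloom) uses the same interface but requires far more memory:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small BLOOM variant; the 176B checkpoint needs hundreds of GB of weights.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is an open-access language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```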
Many scientific domains gather sufficient labels to train machine learning algorithms through the human-in-the-loop techniques provided by the Zooniverse.org citizen science platform. As the range of projects, task types, and data rates increases, accelerating model training is of paramount concern so that volunteer effort can be focused where it is most needed. The application of transfer learning (TL) between Zooniverse projects holds promise as a solution. However, understanding the effectiveness of TL approaches that pretrain on large-scale generic image sets versus images with similar characteristics, possibly from similar tasks, remains an open challenge. We apply a generative segmentation model to two Zooniverse project-based datasets: (1) identifying fat droplets in liver cells (FatChecker; FC) and (2) identifying kelp beds in satellite images (Floating Forests; FF) through transfer learning from the first project. We compare and contrast its performance with a TL model based on the COCO image set, and subsequently with baseline counterparts. We find that both the FC and COCO TL models outperform the baselines when using >75% of the original training sample size. The COCO-based TL model generally performs better than the FC-based one, likely due to its more generalized features. Our investigation provides important insights into the use of TL approaches on multi-domain data hosted across different Zooniverse projects, enabling future projects to accelerate task completion.
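A minimal sketch of how such TL variants are typically set up in PyTorch follows, contrasting a COCO-pretrained Mask R-CNN with a randomly initialized baseline; the paper's actual segmentation model and hyperparameters may differ, and `num_classes=2` (background plus one target class, e.g. fat droplets or kelp) is an assumption:

```python
from torchvision.models.detection import (maskrcnn_resnet50_fpn,
                                          MaskRCNN_ResNet50_FPN_Weights)
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_model(pretrained=True, num_classes=2):
    # COCO-based TL: start from COCO weights; baseline: random init
    weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT if pretrained else None
    model = maskrcnn_resnet50_fpn(weights=weights)
    # Replace the heads for the target task's classes
    in_feats = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)
    in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256,
                                                       num_classes)
    return model

tl_model = build_model(pretrained=True)      # COCO TL condition
baseline = build_model(pretrained=False)     # baseline counterpart
```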
Recent 3D-based manipulation methods either directly predict the grasp pose using 3D neural networks, or solve for the grasp pose using similar objects retrieved from shape databases. However, the former faces generalizability challenges when tested with new robot arms or unseen objects, and the latter assumes that similar objects exist in the databases. We hypothesize that recent 3D modeling methods provide a path towards building a digital replica of the evaluation scene that affords physical simulation and supports robust manipulation-algorithm learning. We propose to reconstruct high-quality meshes from real-world point clouds using a state-of-the-art neural surface reconstruction method (the Real2Sim step). Because most simulators consume meshes for fast simulation, the reconstructed meshes enable grasp-pose label generation without human effort. The generated labels can then be used to train a grasp network that performs robustly in the real evaluation scene (the Sim2Real step). In synthetic and real experiments, we show that the Real2Sim2Real pipeline outperforms baseline grasp networks trained on a large dataset and a grasp-sampling method with retrieval-based reconstruction. The benefit of the Real2Sim2Real pipeline comes from 1) decoupling scene modeling and grasp sampling into sub-problems, and 2) the fact that both sub-problems can be solved with sufficiently high quality using recent 3D learning algorithms and mesh-based physical-simulation techniques.
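As a toy illustration of the kind of label generation a reconstructed mesh enables, the sketch below samples rough antipodal grasp candidates on a mesh surface; the thresholds are assumptions, a unit box stands in for a Real2Sim reconstruction, and the paper's simulator-based grasp evaluation is not reproduced here:

```python
import numpy as np
import trimesh

def sample_grasp_candidates(mesh, n=512, max_width=0.08):
    """Toy grasp sampling: surface point pairs that are roughly
    antipodal and fit within the gripper's maximum opening width."""
    pts, face_idx = trimesh.sample.sample_surface(mesh, 2 * n)
    normals = mesh.face_normals[face_idx]
    grasps = []
    for i in range(0, 2 * n, 2):
        p1, p2 = pts[i], pts[i + 1]
        axis = p2 - p1
        width = np.linalg.norm(axis)
        if width == 0 or width > max_width:
            continue
        axis /= width
        # antipodal check: outward normals oppose the closing direction
        if np.dot(normals[i], axis) < -0.9 and np.dot(normals[i + 1], axis) > 0.9:
            grasps.append((p1, p2))
    return grasps

mesh = trimesh.creation.box(extents=(0.05, 0.05, 0.05))  # stand-in object
print(len(sample_grasp_candidates(mesh)))
```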
Performance metrics for medical image segmentation models are used to measure the agreement between reference annotations and predictions. A common set of metrics is used when developing such models to make results more comparable. However, there is a mismatch between the distributions found in public datasets and the cases encountered in clinical practice. Many common metrics fail to measure the impact of this mismatch, especially for clinical datasets containing uncertain, small, or empty reference annotations. Clinically meaningful agreement of a model may therefore not be validated by such metrics. Dimensions for evaluating clinical value include independence from reference-annotation volume (size), consideration of the uncertainty of reference annotations, reward of volumetric and/or location agreement, and reward of correctly classifying empty reference annotations. Unlike common public datasets, our in-house dataset is more representative: it contains uncertain, small, and empty reference annotations. We investigate publicly available metrics on the predictions of a deep learning framework to determine which settings of common metrics provide meaningful results. We compare against a public benchmark dataset without uncertain, small, or empty reference annotations. The code will be released.
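For instance, the standard Dice coefficient is undefined when both masks are empty and is otherwise dominated by annotation size; below is a minimal sketch of one common convention that rewards correctly predicted empty references (the paper's exact metric settings may differ):

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice coefficient, defined as 1.0 when both masks are empty so
    that correct classification of empty references is rewarded."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    if not ref.any() and not pred.any():
        return 1.0  # empty reference correctly predicted as empty
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())
```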
The removal or cancellation of noise has wide-ranging applications in imaging and acoustics. In everyday life, denoising may even include generative aspects that are unfaithful to the ground truth. For scientific applications, however, denoising must reproduce the ground truth accurately. Here, we show how deep convolutional neural networks can be used to denoise data so that weak signals emerge with quantitative accuracy. In particular, we study X-ray diffraction from crystalline materials. We demonstrate that weak signals stemming from charge ordering, which are insignificant in the noisy data, become visible and accurate in the denoised data. This success is enabled by supervised training of deep neural networks with pairs of measured low-noise and high-noise data. In this way, the neural networks learn the statistical properties of the noise. We demonstrate that using artificial noise (such as Poisson and Gaussian noise) does not yield such quantitatively accurate results. Our approach thus illustrates a practical strategy for noise filtering that can be applied to challenging acquisition problems.
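A minimal sketch of this supervised setup follows; the architecture, optimizer settings, and the random placeholder tensors are illustrative assumptions, with the essential ingredient being that the targets are measured low-noise counterparts rather than synthetically noised images:

```python
import torch
import torch.nn as nn

# Small denoising CNN; the real architecture is an assumption here.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder pair; in practice these are measured high-noise and
# low-noise detector images of the same sample.
noisy = torch.randn(8, 1, 64, 64)
clean = torch.randn(8, 1, 64, 64)

for _ in range(10):  # illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()
```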
Privacy has become a major concern in machine learning. In fact, federated learning is motivated by privacy concerns, as it does not allow the transmission of private data, only of intermediate updates. However, federated learning does not always guarantee privacy protection, since the intermediate updates may also reveal sensitive information. In this paper, we give an explicit information-theoretic analysis of the federated expectation-maximization algorithm for Gaussian mixture models, and prove that the intermediate updates can cause severe privacy leakage. To address the privacy issue, we propose a fully decentralized, privacy-preserving solution that is able to securely compute the updates in each maximization step. In addition, we consider two different types of security attacks: the honest-but-curious and the eavesdropping adversary models. Numerical validation shows that the proposed approach has superior performance compared to existing approaches in terms of both accuracy and privacy level.
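To see where the leakage arises, consider a minimal sketch of federated EM for a GMM: each client sends per-component sufficient statistics, and it is exactly this aggregation that can reveal private data and that the paper's solution computes securely (covariance updates and the secure-summation protocol itself are omitted here as assumptions beyond the scope of the sketch):

```python
import numpy as np
from scipy.stats import multivariate_normal

def local_e_step(X, weights, means, covs):
    """Per-client E-step: responsibilities and sufficient statistics."""
    resp = np.stack([w * multivariate_normal.pdf(X, mean=m, cov=c)
                     for w, m, c in zip(weights, means, covs)], axis=1)
    resp /= resp.sum(axis=1, keepdims=True)
    Nk = resp.sum(axis=0)  # effective counts per component, shape (K,)
    Sx = resp.T @ X        # weighted sums per component, shape (K, d)
    return Nk, Sx

def m_step(client_stats):
    """Aggregation across clients: the intermediate update that can leak."""
    Nk = sum(s[0] for s in client_stats)
    Sx = sum(s[1] for s in client_stats)
    return Nk / Nk.sum(), Sx / Nk[:, None]  # updated weights and means
```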
Adversarial examples against AI systems pose both a risk, via malicious attacks, and an opportunity to improve robustness via adversarial training. In multi-agent settings, adversarial policies can be developed by training an adversarial agent to minimize a victim agent's rewards. Prior work has studied black-box attacks, where the adversary only sees the state observations and effectively treats the victim as just another part of the environment. In this work, we experiment with white-box adversarial policies to study whether an agent's internal state can offer useful information to other agents. We make three contributions. First, we introduce white-box adversarial policies, in which the attacker can observe a victim's internal state at each timestep. Second, we demonstrate that white-box access to the victim enables better attacks in two-agent environments, leading to faster initial learning against the victim and higher asymptotic performance. Third, we show that training against white-box adversarial policies can be used to make learners in single-agent environments more robust to domain shift.
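A minimal sketch of the observation change that distinguishes the white-box setting is given below; the environment interface, the `get_hidden_state` accessor, and the wrapper name are all illustrative assumptions, not the paper's implementation:

```python
import numpy as np

class WhiteBoxObsWrapper:
    """Augments the adversary's observation with the victim's internal
    state at every timestep; a black-box adversary sees only `obs`."""
    def __init__(self, env, victim):
        self.env, self.victim = env, victim

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        hidden = self.victim.get_hidden_state()  # assumed accessor
        aug_obs = np.concatenate([np.ravel(obs), np.ravel(hidden)])
        return aug_obs, reward, done, info
```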
Our world is increasingly populated by intelligent robots with varying degrees of autonomy. To integrate seamlessly into our society, these machines should possess the ability to navigate the complexities of everyday tasks even without direct human input. In other words, we want these robots to understand the intentions of their partners so as to anticipate the best way to help them. In this paper, we present CASPER (Cognitive Architecture for Social Perception and Engagement in Robots): a symbolic cognitive architecture that uses qualitative spatial reasoning to anticipate the goal being pursued by another agent and to compute the best collaborative behavior. This is carried out through an ensemble of parallel processes that model low-level action recognition and high-level goal understanding, both of which are formally verified. We have tested this architecture in a simulated kitchen environment, and the results we have collected show that the robot is able to recognize an ongoing goal and to collaborate appropriately towards its achievement. This demonstrates a novel use of qualitative spatial relations applied to the problem of intention reading in the field of human-robot interaction.
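As a toy illustration of the kind of qualitative spatial reasoning such an architecture builds on, the sketch below discretizes successive agent-object distances into qualitative motion relations that could feed goal prediction; the threshold and labels are assumptions, not CASPER's actual calculus:

```python
def qualitative_motion(d_prev: float, d_curr: float, eps: float = 0.05) -> str:
    """Classify an agent's motion relative to an object from two
    successive distances (a toy qualitative spatial relation)."""
    if d_curr < d_prev - eps:
        return "approaching"  # weak evidence the agent pursues this object
    if d_curr > d_prev + eps:
        return "receding"
    return "static"

# e.g., an agent 1.2 m from the fridge that moves to 0.9 m is "approaching"
print(qualitative_motion(1.2, 0.9))
```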
Despite years of effort, the quantum machine learning community has only been able to show quantum learning advantages for certain contrived, cryptography-inspired datasets in the case of classical data. In this note, we discuss the challenge of finding learning problems that quantum learning algorithms can learn faster than any classical learning algorithm, and we study how to identify such learning problems. Specifically, we reflect on the main concepts in computational learning theory pertaining to this question, and we discuss how subtle changes in definitions can mean conceptually significantly different tasks, which can either lead to a separation or to no separation at all. Moreover, we study existing learning problems with provable quantum speedups to distill sets of more general and sufficient conditions (i.e., 'checklists') for a learning problem to exhibit a separation between classical and quantum learners. These checklists are meant to streamline one's approach to proving quantum speedups for learning problems or to elucidate bottlenecks. Finally, to illustrate their application, we analyze examples of potential separations (i.e., when the learning problem is built from computational separations, or when the data comes from a quantum experiment) through the lens of our approach.
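To fix one of the definitional knobs such discussions turn on, recall the standard efficient PAC criterion; a quantum-classical learning separation then asks for a concept class where only a quantum learner meets it (the formulation below is the textbook definition, not this note's exact variant):

```latex
% Efficient PAC learning of a concept class \mathcal{C} over n-bit inputs:
% for all c \in \mathcal{C}, all distributions D, and all \epsilon, \delta,
\Pr_{S \sim D^m}\!\big[\, \mathrm{err}_D\!\left(\mathcal{A}(S)\right) \le \epsilon \,\big] \;\ge\; 1 - \delta,
\qquad m,\ \mathrm{runtime} \in \mathrm{poly}(n, 1/\epsilon, 1/\delta).
% Separation: some quantum algorithm satisfies this for \mathcal{C},
% while no classical algorithm does.
```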